Learning the Pseudoinverse Solution to Network Weights
The last decade has seen the parallel emergence in computational neuroscience
and machine learning of neural network structures which spread the input signal
randomly to a higher dimensional space; perform a nonlinear activation; and
then solve for a regression or classification output by means of a mathematical
pseudoinverse operation. In the field of neuromorphic engineering, these
methods are increasingly popular for synthesizing biologically plausible neural
networks, but the "learning method" - computation of the pseudoinverse by
singular value decomposition - is problematic both for biological plausibility
and because it is not an online or an adaptive method. We present an online or
incremental method of computing the pseudoinverse, which we argue is
biologically plausible as a learning method, and which can be made adaptable
for non-stationary data streams. The method is significantly more
memory-efficient than the conventional computation of pseudoinverses by
singular value decomposition.
Comment: 13 pages, 3 figures; in submission to Neural Networks
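The online pseudoinverse idea described above can be illustrated with Greville's classical column-recursion, which builds the Moore-Penrose pseudoinverse one column at a time without ever forming an SVD. This is our own sketch of one well-known incremental method, not necessarily the exact algorithm of the paper:

```python
import numpy as np

def greville_pinv(A, tol=1e-10):
    """Build the Moore-Penrose pseudoinverse of A column by column
    using Greville's recursion (an online/incremental method)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    a0 = A[:, [0]]
    denom = float(a0.T @ a0)
    Ap = a0.T / denom if denom > tol else np.zeros((1, m))
    for k in range(1, n):
        a = A[:, [k]]
        d = Ap @ a                      # coefficients of a in the current column space
        c = a - A[:, :k] @ d            # component of a outside that space
        if float(c.T @ c) > tol:
            b = c.T / float(c.T @ c)    # new column is linearly independent
        else:
            b = (d.T @ Ap) / (1.0 + float(d.T @ d))  # dependent column
        Ap = np.vstack([Ap - d @ b, b])
    return Ap
```

Because each update uses only the new column and the current pseudoinverse, the recursion never stores or decomposes the full matrix, which is the memory advantage the abstract alludes to.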
Neuromorphic Engineering Editors' Pick 2021
This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers' strong community by recognizing highly deserving authors.
Martingales and the fixation time of evolutionary graphs with arbitrary dimensionality
Evolutionary graph theory (EGT) investigates the Moran birth–death process constrained by graphs. Its two principal goals are to find the fixation probability and time for some initial population of mutants on the graph. The fixation probability of graphs has received considerable attention. Less is known about the distribution of fixation time. We derive clean, exact expressions for the full conditional characteristic functions (CCFs) of a close proxy to fixation and extinction times. That proxy is the number of times that the mutant population size changes before fixation or extinction. We derive these CCFs from a product martingale that we identify for an evolutionary graph with any number of partitions. The existence of that martingale only requires that the connections between those partitions are of a certain type. Our results are the first expressions for the CCFs of any proxy to fixation time on a graph with any number of partitions. The parameter dependence of our CCFs is explicit, so we can explore how they depend on graph structure. Martingales are a powerful approach to study principal problems of EGT. Their applicability is invariant to the number of partitions in a graph, so we can study entire families of graphs simultaneously.
Martingales and the characteristic functions of absorption time on bipartite graphs
Evolutionary graph theory investigates how spatial constraints affect processes that model evolutionary selection, e.g. the Moran process. Its principal goals are to find the fixation probability and the conditional distributions of fixation time, and show how they are affected by different graphs that impose spatial constraints. Fixation probabilities have generated significant attention, but much less is known about the conditional time distributions, even for simple graphs. Those conditional time distributions are difficult to calculate, so we consider a close proxy to them: the number of times the mutant population size changes before absorption. We employ martingales to obtain the conditional characteristic functions (CCFs) of that proxy for the Moran process on the complete bipartite graph. We consider the Moran process on the complete bipartite graph as an absorbing random walk in two dimensions. We then extend Wald's martingale approach to sequential analysis from one dimension to two. Our expressions for the CCFs are novel, compact, exact, and their parameter dependence is explicit. We show that our CCFs closely approximate those of absorption time. Martingales provide an elegant framework to solve principal problems of evolutionary graph theory. It should be possible to extend our analysis to more complex graphs than we show here.
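The proxy statistic in these two abstracts is easy to reproduce numerically. The sketch below simulates the Moran birth-death process on the complete bipartite graph K_{n1,n2} and counts how many times the mutant population size changes before absorption; it is an illustrative simulation of the proxy, not the martingale derivation itself, and the fitness value is an arbitrary choice:

```python
import random

def moran_proxy(n1, n2, r=2.0, rng=None):
    """Moran birth-death process on the complete bipartite graph K_{n1,n2}.
    Returns (fixed, changes): whether the mutant lineage fixed, and the
    number of times the total mutant count changed before absorption --
    the proxy statistic used in place of absorption time."""
    rng = rng or random.Random()
    n = (n1, n2)
    m = [1, 0]                    # one initial mutant, placed in partition 0
    changes = 0
    while 0 < m[0] + m[1] < n1 + n2:
        # pick a reproducer proportional to fitness (mutants have fitness r)
        w = [r * m[0], n[0] - m[0], r * m[1], n[1] - m[1]]
        k = rng.choices((0, 1, 2, 3), weights=w)[0]
        side, mutant_born = k // 2, (k % 2 == 0)
        other = 1 - side
        # the offspring replaces a uniformly random vertex in the other partition
        replaced_mutant = rng.random() < m[other] / n[other]
        if mutant_born != replaced_mutant:
            m[other] += 1 if mutant_born else -1
            changes += 1
    return m[0] + m[1] == n1 + n2, changes
```

Averaging `changes` over many runs, conditioned on `fixed`, gives an empirical check on the conditional distributions that the CCFs describe exactly.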
A compact aVLSI conductance-based silicon neuron
We present an analogue Very Large Scale Integration (aVLSI) implementation
that uses first-order lowpass filters to implement a conductance-based silicon
neuron for high-speed neuromorphic systems. The aVLSI neuron consists of a soma
(cell body) and a single synapse, which is capable of linearly summing both the
excitatory and inhibitory postsynaptic potentials (EPSP and IPSP) generated by
the spikes arriving from different sources. Rather than biasing the silicon
neuron with different parameters for different spiking patterns, as is
typically done, we provide digital control signals, generated by an FPGA, to
the silicon neuron to obtain different spiking behaviours. The proposed neuron
is only ~26.5 µm² in the IBM 130 nm process and thus can be integrated at very
high density. Circuit simulations show that this neuron can emulate different
spiking behaviours observed in biological neurons.
Comment: BioCAS-201
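The modelling idea in this abstract, synaptic conductances as first-order lowpass filters of incoming spike trains driving a conductance-based membrane, can be sketched in a few lines of discrete-time simulation. All constants below are illustrative textbook values, not the chip's parameters:

```python
def simulate(spike_times_exc, spike_times_inh, T=0.2, dt=1e-4):
    """Conductance-based neuron whose excitatory and inhibitory synaptic
    conductances are first-order lowpass filters of the input spike trains.
    Returns the output spike times (seconds)."""
    tau_m, tau_s = 20e-3, 5e-3           # membrane and synaptic time constants
    E_L, E_e, E_i = -70e-3, 0.0, -80e-3  # leak, excitatory, inhibitory reversals (V)
    V_th, V_reset = -50e-3, -70e-3       # spike threshold and reset (V)
    w = 50.0                             # per-spike conductance increment (1/s)
    V, g_e, g_i = E_L, 0.0, 0.0
    out_spikes = []
    exc = set(round(t / dt) for t in spike_times_exc)
    inh = set(round(t / dt) for t in spike_times_inh)
    for k in range(int(T / dt)):
        # first-order lowpass filtering of the spike trains
        g_e += -g_e * dt / tau_s + (w if k in exc else 0.0)
        g_i += -g_i * dt / tau_s + (w if k in inh else 0.0)
        # conductance-based membrane: EPSPs and IPSPs sum linearly here
        dV = (E_L - V) / tau_m + g_e * (E_e - V) + g_i * (E_i - V)
        V += dV * dt
        if V >= V_th:
            out_spikes.append(k * dt)
            V = V_reset
    return out_spikes
```

A burst of excitatory input spikes produces summed EPSPs that cross threshold, while inhibitory input pulls the membrane toward E_i, mirroring the linear PSP summation the abstract describes.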
A Reconfigurable Mixed-signal Implementation of a Neuromorphic ADC
We present a neuromorphic Analogue-to-Digital Converter (ADC), which uses
integrate-and-fire (I&F) neurons as the encoders of the analogue signal, with
modulated inhibition to decohere the neuronal spike trains. The architecture
consists of an analogue chip and a control module. The analogue chip comprises
two scan chains and a two-dimensional integrate-and-fire neuronal array.
Individual neurons are accessed via the chains one by one, without any encoder,
decoder, or arbiter. The control module is implemented on an FPGA (Field
Programmable Gate Array), which sends scan enable signals to the scan chains
and controls the inhibition for individual neurons. Since the control module is
implemented on an FPGA, it can be easily reconfigured. Additionally, we propose
a pulse width modulation methodology for the lateral inhibition, which makes
use of different pulse widths indicating different strengths of inhibition for
each individual neuron to decohere neuronal spikes. Software simulations in
this paper tested the robustness of the proposed ADC architecture to fixed
random noise. A circuit simulation using ten neurons shows the performance and
the feasibility of the architecture.
Comment: BioCAS-201
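The encoding principle behind a neuromorphic ADC of this kind can be sketched with a single integrate-and-fire neuron: the analogue input is integrated, and a spike is emitted each time the integral crosses a threshold, so the spike count digitizes the signal's integral. This is a minimal illustration of the I&F encoding idea, with assumed parameter values, not the paper's circuit:

```python
def iaf_encode(samples, threshold=1.0, dt=1e-3):
    """Integrate-and-fire encoder: integrate the sampled analogue input
    and emit a spike each time the integral crosses the threshold.
    Reset-by-subtraction keeps the sub-threshold residue, so the spike
    count approximates (integral of input) / threshold."""
    v = 0.0
    spikes = []
    for k, x in enumerate(samples):
        v += x * dt
        while v >= threshold:
            spikes.append(k * dt)
            v -= threshold
    return spikes
```

In an array of such encoders, inhibition between neurons (as in the paper's pulse-width-modulated scheme) would shift each neuron's integration state so that their spike trains do not lock together.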
Single-bit-per-weight deep convolutional neural networks without batch-normalization layers for embedded systems
Batch-normalization (BN) layers are thought to be an integrally important
layer type in today's state-of-the-art deep convolutional neural networks for
computer vision tasks such as classification and detection. However, BN layers
introduce complexity and computational overheads that are highly undesirable
for training and/or inference on low-power custom hardware implementations of
real-time embedded vision systems such as UAVs, robots and Internet of Things
(IoT) devices. They are also problematic when batch sizes need to be very small
during training, and innovations introduced more recently than BN layers, such
as residual connections, may have lessened their impact. In this
paper we aim to quantify the benefits BN layers offer in image classification
networks, in comparison with alternative choices. In particular, we study
networks that use shifted-ReLU layers instead of BN layers. We found, following
experiments with wide residual networks applied to the ImageNet, CIFAR-10, and
CIFAR-100 image classification datasets, that BN layers do not consistently
offer a significant advantage. We found that the accuracy margin offered by BN
layers depends on the data set, the network size, and the bit-depth of weights.
We conclude that in situations where BN layers are undesirable due to speed,
memory, or complexity costs, using shifted-ReLU layers instead should be
considered; we found they can offer advantages in all these areas, and often do
not impose a significant accuracy cost.
Comment: 8 pages, published IEEE conference paper
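A shifted ReLU is a one-line activation. The sketch below uses the form max(x, shift) with shift = -1.0; both the exact form and the shift value are assumptions for illustration, since the abstract does not specify them, and the paper treats the shift as a design choice:

```python
import numpy as np

def shifted_relu(x, shift=-1.0):
    """Shifted ReLU: like ReLU, but clipped at `shift` instead of 0.
    With standardized inputs this keeps activations roughly centred
    without computing any batch statistics (the role BN would play).
    The value -1.0 is an assumed illustration."""
    return np.maximum(x, shift)
```

Unlike a BN layer, this activation needs no running statistics, no extra parameters, and no batch-size-dependent behaviour, which is why it suits low-power embedded inference.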